We present DAVE Aquatic Virtual Environment (DAVE), an open-source simulation stack for underwater robots, sensors, and environments. Conventional robotics simulators are not designed to address the unique challenges that marine environments bring, including but not limited to environmental conditions that vary spatially and temporally, impaired or challenging perception, and the unavailability of data in largely unexplored environments. Given the variety of sensors and platforms involved, the wheel is often reinvented for specific use cases, which inevitably resists wider adoption. Building on existing simulators, we provide a framework to help speed up the development and evaluation of algorithms that would otherwise require expensive and time-consuming operations at sea. The framework includes basic building blocks (e.g., new vehicles, a water-tracking Doppler velocity logger, physics-based multibeam sonar) as well as development tools (e.g., dynamic bathymetry spawning, ocean currents), allowing users to focus on methodology rather than software infrastructure. We demonstrate usage through example scenarios, bathymetric data import, user interfaces for data inspection and motion planning for manipulation, and visualization.
We consider a long-term average profit maximizing admission control problem in an M/M/1 queuing system with a known arrival rate but an unknown service rate. With a fixed reward collected upon service completion and a cost per unit of time enforced on customers waiting in the queue, a dispatcher decides upon arrivals whether to admit the arriving customer or not based on the full history of observations of the queue-length of the system. \cite[Econometrica]{Naor} showed that if all the parameters of the model are known, then it is optimal to use a static threshold policy - admit if the queue-length is less than a predetermined threshold and otherwise not. We propose a learning-based dispatching algorithm and characterize its regret with respect to optimal dispatch policies for the full information model of \cite{Naor}. We show that the algorithm achieves an $O(1)$ regret when all optimal thresholds with full information are non-zero, and achieves an $O(\ln^{3+\epsilon}(N))$ regret in the case that an optimal threshold with full information is $0$ (i.e., an optimal policy is to reject all arrivals), where $N$ is the number of arrivals and $\epsilon>0$.
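To make the full-information benchmark concrete, here is a minimal simulation sketch of the static threshold policy from Naor's model (admit an arrival iff the current queue length is below a fixed threshold). It is illustrative only, not the paper's learning algorithm; the parameter names (lam, mu, R, c) and the reading of the holding cost as accruing on all customers in the system are assumptions.

import random

def simulate_threshold_policy(lam, mu, R, c, threshold, t_end=200_000.0, seed=0):
    """Estimate the long-run average profit of an M/M/1 queue under a static threshold policy."""
    rng = random.Random(seed)
    t, q, profit = 0.0, 0, 0.0
    while t < t_end:
        # Next event: arrival (rate lam) or, if the queue is non-empty, departure (rate mu).
        rate = lam + (mu if q > 0 else 0.0)
        dt = rng.expovariate(rate)
        profit -= c * q * dt            # holding cost accrues on waiting customers
        t += dt
        if rng.random() < lam / rate:   # arrival
            if q < threshold:           # admit only below the threshold
                q += 1
        else:                           # service completion
            q -= 1
            profit += R                 # fixed reward collected upon completion
    return profit / t

# Example: sweep static thresholds when mu is known, to locate the best one.
if __name__ == "__main__":
    for n in range(6):
        print(n, round(simulate_threshold_policy(lam=0.8, mu=1.0, R=5.0, c=1.0, threshold=n), 3))

The learning problem studied in the abstract arises precisely because mu is unknown, so such a sweep cannot be evaluated offline and the dispatcher must learn from queue-length observations.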
Active target sensing is the task of discovering and classifying an unknown number of targets in an environment and is critical in search-and-rescue missions. This paper develops a deep reinforcement learning approach to plan informative trajectories that increase the likelihood for an uncrewed aerial vehicle (UAV) to discover missing targets. Our approach efficiently (1) explores the environment to discover new targets, (2) exploits its current belief of the target states and incorporates inaccurate sensor models for high-fidelity classification, and (3) generates dynamically feasible trajectories for an agile UAV by employing a motion primitive library. Extensive simulations on randomly generated environments show that our approach is more efficient in discovering and classifying targets than several other baselines. A unique characteristic of our approach, in contrast to heuristic informative path planning approaches, is that it is robust to varying amounts of deviations of the prior belief from the true target distribution, thereby alleviating the challenge of designing heuristics specific to the application conditions.
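As a small illustration of point (2), incorporating an inaccurate sensor model into the target belief, here is a sketch of a Bernoulli belief update with Bayes' rule. The true/false positive rates and the per-cell belief structure are assumptions for illustration, not the paper's exact model.

def update_belief(prior, detection, tpr=0.8, fpr=0.1):
    """Posterior probability that a cell contains a target after one noisy observation."""
    if detection:
        likelihood_t, likelihood_f = tpr, fpr
    else:
        likelihood_t, likelihood_f = 1.0 - tpr, 1.0 - fpr
    joint_t = likelihood_t * prior
    joint_f = likelihood_f * (1.0 - prior)
    return joint_t / (joint_t + joint_f)

b = 0.3
for obs in [True, True, False, True]:   # a sequence of noisy detections of one cell
    b = update_belief(b, obs)
print(round(b, 3))                       # belief after the observation sequence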
Microprocessor architects are increasingly resorting to domain-specific customization in the quest for high performance and energy efficiency. As systems grow in complexity, fine-tuning architectural parameters across multiple sub-systems (e.g., datapath, memory blocks in different hierarchies, interconnects, compiler optimization, etc.) quickly results in a combinatorial explosion of the design space. This makes domain-specific customization an extremely challenging task. Prior work explores using reinforcement learning (RL) and other optimization methods to automatically explore the large design space. However, these methods have traditionally relied on single-agent RL/ML formulations. It is unclear how scalable single-agent formulations are as the complexity of the design space increases (e.g., full-stack System-on-Chip design). Therefore, we propose an alternative formulation that leverages Multi-Agent RL (MARL) to tackle this problem. The key idea behind using MARL is the observation that parameters across different sub-systems are more or less independent, thus allowing a decentralized role to be assigned to each agent. We test this hypothesis by designing a domain-specific DRAM memory controller for several workload traces. Our evaluation shows that the MARL formulation consistently outperforms single-agent RL baselines such as Proximal Policy Optimization and Soft Actor-Critic over different target objectives such as low power and latency. In doing so, this work opens a pathway for new and promising research on MARL solutions for hardware architecture search.
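A toy sketch of the decentralized idea behind such a formulation: each agent independently tunes one sub-system parameter, and every agent receives the same global reward. The parameter grids, cost function, and epsilon-greedy agents below are made-up stand-ins, not the paper's MARL system.

import random

PARAM_GRIDS = {
    "row_buffer_policy": [0, 1, 2],
    "queue_depth":       [8, 16, 32, 64],
    "page_size_kb":      [1, 2, 4],
}

def shared_reward(config):
    # Stand-in objective: lower "latency + power" is better (reward is the negated cost).
    latency = abs(config["queue_depth"] - 32) + 2 * config["row_buffer_policy"]
    power = config["page_size_kb"] * 1.5
    return -(latency + power)

class BanditAgent:
    """Epsilon-greedy agent owning a single design parameter."""
    def __init__(self, choices, eps=0.1):
        self.choices, self.eps = choices, eps
        self.value = {c: 0.0 for c in choices}
        self.count = {c: 0 for c in choices}
    def act(self, rng):
        if rng.random() < self.eps:
            return rng.choice(self.choices)
        return max(self.choices, key=lambda c: self.value[c])
    def update(self, choice, reward):
        self.count[choice] += 1
        self.value[choice] += (reward - self.value[choice]) / self.count[choice]

rng = random.Random(0)
agents = {name: BanditAgent(grid) for name, grid in PARAM_GRIDS.items()}
for step in range(2000):
    config = {name: agent.act(rng) for name, agent in agents.items()}  # joint design point
    r = shared_reward(config)
    for name, agent in agents.items():                                 # shared reward, local update
        agent.update(config[name], r)

best = {name: max(a.choices, key=lambda c: a.value[c]) for name, a in agents.items()}
print("learned configuration:", best)

Because the stand-in cost is separable across parameters, the decentralized agents recover the jointly optimal configuration; the abstract's hypothesis is that real sub-system parameters are close enough to independent for a similar decomposition to pay off.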
Generalizability of time series forecasting models depends on the quality of model selection. Temporal cross validation (TCV) is a standard technique to perform model selection in forecasting tasks. TCV sequentially partitions the training time series into train and validation windows, and performs hyperparameter optimization (HPO) of the forecast model to select the model with the best validation performance. Model selection with TCV often leads to poor test performance when the test data distribution differs from that of the validation data. We propose a novel model selection method, H-Pro, that exploits the data hierarchy often associated with a time series dataset. Generally, the aggregated data at the higher levels of the hierarchy show better predictability and more consistency compared to the bottom-level data, which is sparser and (sometimes) intermittent. H-Pro performs the HPO of the lowest-level student model based on the test proxy forecasts obtained from a set of teacher models at higher levels in the hierarchy. The consistency of the teachers' proxy forecasts helps select better student models at the lowest level. We perform extensive empirical studies on multiple datasets to validate the efficacy of the proposed method. H-Pro, along with off-the-shelf forecasting models, outperforms existing state-of-the-art forecasting methods, including the winning models of the M5 point-forecasting competition.
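A minimal sketch of one reading of this idea (an assumed interpretation, not the authors' code): a teacher model forecasts the aggregated top-level series over the test horizon, and the student's hyperparameter is chosen so that its aggregated bottom-level forecasts best match that proxy. The toy forecasters and the damping hyperparameter are illustrative assumptions.

import numpy as np

def naive_seasonal_forecast(series, horizon, season=7):
    """Toy teacher: repeat the last observed season over the horizon."""
    return np.resize(series[-season:], horizon)

def damped_drift_forecast(series, horizon, damping):
    """Toy student with one hyperparameter (damping of the recent trend)."""
    drift = (series[-1] - series[-8]) / 7.0
    steps = np.arange(1, horizon + 1)
    return series[-1] + damping * drift * steps

def h_pro_select(bottom_series, horizon, candidate_dampings):
    # Teacher proxy: forecast the aggregated (top-level) series directly.
    proxy = naive_seasonal_forecast(np.sum(bottom_series, axis=0), horizon)
    best, best_err = None, np.inf
    for d in candidate_dampings:
        # Student: forecast every bottom series, aggregate, and compare against the proxy.
        student = sum(damped_drift_forecast(s, horizon, d) for s in bottom_series)
        err = np.mean((student - proxy) ** 2)
        if err < best_err:
            best, best_err = d, err
    return best

rng = np.random.default_rng(0)
bottom = np.array([50 + 10 * np.sin(np.arange(200) * 2 * np.pi / 7) + rng.normal(0, 5, 200)
                   for _ in range(20)])
print("selected damping:", h_pro_select(bottom, horizon=14, candidate_dampings=[0.0, 0.5, 1.0]))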
We propose a new formulation of the multi-robot task planning and allocation problem that incorporates (a) precedence relations between tasks; (b) coordination of tasks, allowing multiple robots to achieve increased efficiency; and (c) cooperation on tasks through the formation of robot coalitions when individual robots cannot perform a task alone. In our formulation, a task graph specifies the tasks and the relations among them. We define a set of reward functions over the nodes and edges of the task graph. These functions model the effect of robot coalition size on task performance and incorporate the influence of one task's performance on a dependent task. Solving this problem optimally is NP-hard. However, the task graph formulation allows us to leverage minimum-cost network flow methods to obtain approximate solutions efficiently. In addition, we explore a mixed-integer programming approach, which provides optimal solutions for small instances of the problem but is computationally expensive. We also develop a greedy heuristic algorithm as a baseline. Our modeling and solution approaches yield task plans that exploit task precedence relations as well as robot coordination and cooperation to achieve high task performance, even in large missions with many agents.
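To illustrate the kind of structure involved, here is a simplified sketch of a greedy baseline on a task graph: tasks are processed in topological order, and each is assigned the coalition size that maximizes a node reward which also depends on how well its predecessor tasks were performed. The task graph, reward shape, and robot cost below are hypothetical; this is not the paper's formulation or its network-flow method.

import math

# Hypothetical task graph: each task maps to its list of precedence predecessors.
TASKS = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
TEAM_SIZE = 6

def node_reward(coalition_size, predecessor_perf):
    """Toy node reward: diminishing returns in coalition size, scaled by predecessor performance."""
    if coalition_size == 0:
        return 0.0
    influence = math.prod(predecessor_perf) if predecessor_perf else 1.0
    return influence * (1.0 - math.exp(-0.8 * coalition_size))

def greedy_allocation(tasks, team_size, robot_cost=0.05):
    perf, alloc = {}, {}
    for task, preds in tasks.items():        # dict order happens to be a valid topological order here
        pred_perf = [perf[p] for p in preds]
        best_k, best_net = 0, 0.0
        for k in range(team_size + 1):       # robots are recycled between sequentially executed tasks
            net = node_reward(k, pred_perf) - robot_cost * k
            if net > best_net:
                best_k, best_net = k, net
        alloc[task] = best_k
        perf[task] = node_reward(best_k, pred_perf)
    return alloc, perf

allocation, performance = greedy_allocation(TASKS, TEAM_SIZE)
print(allocation, {t: round(v, 2) for t, v in performance.items()})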
Previous studies on perimeter defense games have mainly focused on fully observable settings in which the true state of every player is known to all players. However, this is unrealistic for practical implementation, since the defenders may have to perceive the intruders and estimate their states. In this work, we study the perimeter defense game in a photo-realistic simulator and in the real world, requiring the defenders to estimate intruder states from vision. We address this problem by training a machine-learning-based system for intruder pose detection with domain randomization, aggregating multiple views to reduce state estimation error, and adapting the defense strategy accordingly. We newly introduce performance metrics to evaluate vision-based perimeter defense. Through extensive experiments, we show that our approach improves state estimation and, ultimately, perimeter defense performance in both 1-defender-vs-1-intruder and 2-defenders-vs-1-intruder games.
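As a small sketch of one way to aggregate multiple views (an assumed fusion scheme, not necessarily the one used in the work above): each defender camera reports an intruder position estimate with an uncertainty, and the fused estimate is the inverse-variance weighted average.

import numpy as np

def fuse_views(estimates, variances):
    """estimates: (n_views, 2) xy positions; variances: (n_views,) per-view variance."""
    w = 1.0 / np.asarray(variances)
    fused = (w[:, None] * np.asarray(estimates)).sum(axis=0) / w.sum()
    fused_var = 1.0 / w.sum()
    return fused, fused_var

views = np.array([[4.9, 2.1], [5.3, 1.8], [5.0, 2.0]])   # per-view xy estimates of the intruder
vars_ = np.array([0.5, 1.0, 0.2])                         # noisier views receive lower weight
pos, var = fuse_views(views, vars_)
print(pos.round(2), round(var, 3))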
This study provides a novel framework for estimating the economic, environmental, and social value of electrifying public transit buses in cities around the world based on open-source data. Electric buses are a compelling candidate to replace diesel buses for environmental and social benefits. However, the applicability of state-of-the-art models for assessing the value of bus electrification is limited, because they require fine-grained and customized data on bus operations that can be difficult to procure. Our valuation tool uses the General Transit Feed Specification, a standard data format used by transit agencies worldwide, to provide high-level guidance for developing a prioritization strategy for electrifying bus fleets. We develop physics-informed machine learning models to evaluate the energy consumption, carbon emissions, health impacts, and total cost of ownership for each transit route. We demonstrate the scalability of our tool through a case study of the bus routes in the Greater Boston and Milan metropolitan areas.
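A rough sketch of how standard GTFS files can feed such a screening step (this is a simple per-km estimate, not the physics-informed model described above): route length is computed from shapes.txt and multiplied by an assumed kWh/km figure. The feed directory name and the consumption constant are illustrative assumptions.

import math
import pandas as pd

KWH_PER_KM = 1.3  # assumed average consumption for an electric transit bus

def haversine_km(lat1, lon1, lat2, lon2):
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi, dlmb = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def route_energy_estimates(gtfs_dir):
    shapes = pd.read_csv(f"{gtfs_dir}/shapes.txt").sort_values(["shape_id", "shape_pt_sequence"])
    trips = pd.read_csv(f"{gtfs_dir}/trips.txt")
    # Length of each shape from consecutive shape points.
    lengths = {}
    for shape_id, grp in shapes.groupby("shape_id"):
        pts = grp[["shape_pt_lat", "shape_pt_lon"]].to_numpy()
        lengths[shape_id] = sum(haversine_km(*pts[i], *pts[i + 1]) for i in range(len(pts) - 1))
    trips["trip_km"] = trips["shape_id"].map(lengths)
    per_route = trips.groupby("route_id")["trip_km"].sum()
    return (per_route * KWH_PER_KM).sort_values(ascending=False)

# Usage (hypothetical feed directory):
# print(route_energy_estimates("gtfs_feed").head())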
Mechanical systems naturally evolve on principal bundles that describe their inherent symmetries. The resulting decomposition of the configuration manifold into a symmetry group and an internal shape space has provided deep insights into the locomotion of many robotic and biological systems. On the other hand, the property of differential flatness has enabled efficient and effective planning and control algorithms for a variety of robotic systems. Yet a practical means of finding a flat output for an arbitrary robotic system remains an open question. In this work, we exhibit a surprising new connection between these two domains, using symmetries directly to construct flat outputs for the first time. We provide sufficient conditions for the existence of a trivialization of the bundle in which the group variables themselves are a flat output. We call this a geometric flat output, since it is equivariant (i.e., symmetry-preserving) and often global or almost global, properties not generally enjoyed by other flat outputs. In such a trivialization, the motion planning problem is easily solved, since a given trajectory for the group variables fully determines the trajectory for the shape variables that exactly realizes this motion. We provide a partial catalog of robotic systems with geometric flat outputs and worked examples for the planar rocket, the planar aerial manipulator, and the quadrotor.
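For readers unfamiliar with the flatness side of this connection, a minimal statement of the standard definition of differential flatness (textbook background, not a result of the work above): a system is flat when its state and input can be recovered from a flat output and finitely many of its time derivatives.

\begin{align*}
\dot{x} &= f(x, u), \qquad x \in \mathbb{R}^n, \; u \in \mathbb{R}^m, \\
y &= h\!\left(x, u, \dot{u}, \dots, u^{(k)}\right) \in \mathbb{R}^m \quad \text{(flat output)}, \\
x &= \phi\!\left(y, \dot{y}, \dots, y^{(r)}\right), \qquad
u = \psi\!\left(y, \dot{y}, \dots, y^{(r+1)}\right).
\end{align*}

In the terminology above, a geometric flat output is one for which y can be taken to be the symmetry-group variables themselves in a suitable trivialization of the bundle.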
In recent years, landmark complexes have been successfully employed for localization-free and metric-free autonomous exploration using a group of sensing-limited and communication-limited robots in GPS-denied environments. To ensure rapid and complete exploration, existing works make assumptions about the density and distribution of landmarks in the environment. These assumptions may be overly restrictive, especially in hazardous environments where landmarks may be damaged or entirely missing. In this paper, we first propose a deep reinforcement learning framework for multi-agent cooperative exploration in environments with sparse landmarks while reducing client-server communication. By leveraging recent developments in partial observability and credit assignment, our framework can efficiently train exploration policies for multi-robot systems. The policy receives individual rewards for actions based on a proximity sensor with limited range and resolution, which are combined with group rewards to encourage collaborative exploration and construction of the landmark complex through the observation of 0-, 1-, and 2-dimensional simplices. In addition, we employ a three-stage curriculum learning strategy to mitigate reward sparsity by gradually adding random obstacles and destroying random landmarks. Experiments in simulation show that our method outperforms state-of-the-art landmark complex exploration methods in efficiency across different environments with sparse landmarks.
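A schematic sketch of what such a three-stage curriculum might look like in the environment generator (an assumed schedule, not the authors' exact configuration): later stages add more random obstacles and drop more landmarks, making the group reward sparser and the task harder.

import random

CURRICULUM = [
    {"stage": 1, "max_obstacles": 0,  "landmark_drop": 0.0},
    {"stage": 2, "max_obstacles": 10, "landmark_drop": 0.2},
    {"stage": 3, "max_obstacles": 25, "landmark_drop": 0.5},
]

def sample_environment(stage_cfg, base_landmarks, rng):
    """Return (landmarks, obstacles) for one training episode at the given curriculum stage."""
    keep = max(1, int(len(base_landmarks) * (1.0 - stage_cfg["landmark_drop"])))
    landmarks = rng.sample(base_landmarks, keep)           # destroy random landmarks
    n_obs = rng.randint(0, stage_cfg["max_obstacles"])     # add random obstacles
    obstacles = [(rng.uniform(0, 50), rng.uniform(0, 50)) for _ in range(n_obs)]
    return landmarks, obstacles

rng = random.Random(0)
base = [(rng.uniform(0, 50), rng.uniform(0, 50)) for _ in range(40)]
for cfg in CURRICULUM:
    lms, obs = sample_environment(cfg, base, rng)
    print(f"stage {cfg['stage']}: {len(lms)} landmarks, {len(obs)} obstacles")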